Search Results: "igor"

30 May 2013

Daniel Pocock: Free JavaScript, Debian and Drupal

Hot on the heels of my announcement about the benefits of combining free JavaScript on Debian with Drupal's libraries module, the FSF has launched a high profile campaign directed at public web sites using non-free JavaScript. It's excellent to see that the rigorous policies used by Debian and the Drupal project, such as the Debian Free Software Guidelines and Drupal's use of the GPL, have provided a turn-key solution that web publishers can turn to in order to give the best possible experience to their end users.

23 March 2013

Dirk Eddelbuettel: Rcpp 0.10.3

A new release 0.10.3 of Rcpp is now on CRAN and in Debian. This is the fourth release in the 0.10.* series, and further extends and solidifies the excellent Rcpp attributes. A few other bugs were fixed as well, and support for wide character strings has been added. We once again tested this fairly rigorously by checking against 86 of the 100 CRAN packages depending on Rcpp. All of these passed. So we do not expect any issues with dependent packages, but one never knows. The complete NEWS entry for 0.10.3 is below; more details are in the ChangeLog file in the package and on the Rcpp Changelog page.
Changes in Rcpp version 0.10.3 (2013-03-23)
  • Changes in R code:
    • Prevent build failures on Windows when Rcpp is installed in a library path with spaces (transform paths in the same manner that R does before passing them to the build system).
  • Changes in Rcpp attributes:
    • Rcpp modules can now be used with sourceCpp
    • Standalone roxygen chunks (e.g. to document a class) are now transposed into RcppExports.R
    • Added Rcpp::plugins attribute for binding directly to inline plugins. Plugins can be registered using the new registerPlugin function.
    • Added built-in cpp11 plugin for specifying the use of C++11 in a translation unit
    • Merge existing values of build related environment variables for sourceCpp
    • Add global package include file to RcppExports.cpp if it exists
    • Stop with an error if the file name passed to sourceCpp has spaces in it
    • Return invisibly from void functions
    • Ensure that line comments invalidate block comments when parsing for attributes
    • Eliminated spurious empty hello world function definition in Rcpp.package.skeleton
  • Changes in Rcpp API:
    • The very central use of R API R_PreserveObject and R_ReleaseObject has been replaced by a new system based on the functions Rcpp_PreserveObject, Rcpp_ReleaseObject and Rcpp_ReplaceObject which shows better performance and is implemented using a generic vector treated as a stack instead of a pairlist in the R implementation. However, as this preserve / release code is still a little rough at the edges, a new #define is used (in config.h) to disable it for now.
    • Platform-dependent code in Timer.cpp now recognises a few more BSD variants thanks to contributed defined() test suggestions
    • Support for wide character strings has been added throughout the API. In particular String, CharacterVector, wrap and as are aware of wide character strings
Thanks to CRANberries, you can also look at a diff to the previous release 0.10.2. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page.
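To make the attributes workflow that this release keeps extending a little more concrete, here is a minimal sketch of a standalone C++ file using the new Rcpp::plugins attribute to request C++11; the file name and function are invented for this illustration and are not part of the release itself.

// square.cpp -- hypothetical example file
// The plugins attribute below is the new mechanism from 0.10.3; cpp11 is the built-in plugin it ships with.
// [[Rcpp::plugins(cpp11)]]
#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
NumericVector squareAll(NumericVector x) {
    NumericVector out(x.size());
    for (int i = 0; i < x.size(); ++i)
        out[i] = x[i] * x[i];   // element-wise square
    return out;
}

From R, Rcpp::sourceCpp("square.cpp") should compile the file and make squareAll() callable in the session, with the plugin adding the compiler's C++11 flag.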

22 February 2013

Richard Hartmann: Finland I

Finland.

Helsinki, Lahti, streets

Arriving at Helsinki airport, we filed a claim with Lufthansa as a hard shell suitcase had a splintered corner. We were surprised that so many Finns arrived from Munich with skis; more on that later. We picked up our car and started on our way towards Koli; driving with a top speed of 100 km/h and often being limited to 80 km/h or even 60 km/h is... unusual. Finnish police/authorities seem to be obsessed with enforcing those speed limits, as there are a lot of speed cameras along the way. Finnish people seem to be similarly obsessed with slot machines; there is an incredible number of them at gas stations and a constant stream of people playing them. From an outsider's perspective, it's weird that a country so strict about one form of addiction, alcohol, working against it vigorously by means of taxes, would let another form of addiction, gambling, run so freely and thus allow so many slot machines. Speaking of taxes on alcohol: a single 0.33 l bottle of beer is more expensive in a Finnish supermarket than 0.5 l of beer in a German restaurant. Which also explains why supermarkets tend to have a rather large section of relatively cheap alcohol-free beer. Anyway, coming back to streets: highway intersections don't have continuous on/off ramps from which you change from one highway to another; you drive off the highway, stop at a traffic light, and then continue onto the other highway. Weird system, but given the amount of traffic we witnessed, it's probably Good Enough (tm). Stopping for a short time in Lahti simply because it's apparently famous for winter sports competitions, we arrived at Future Freetime in Koli national park after about five to six gruelling hours of net driving through somewhat bad weather and behind slow drivers.

Koli

Hiking up to Ukko-Koli and its sister peaks proved to be rather exhausting as we kept breaking through the snow cover to our knees and sometimes even our hips. Once we were up there, we realized that even though you couldn't see it in between the trees, there was fog all over the plains, so we couldn't see anything. Still, it was a nice hike, even if somewhat short. Note to self: even when a trail is marked locally, if OpenStreetMap does not know about it... don't walk along it. Especially not when the going's rough already. And if there's a sign suggesting you wear snow shoes... wear snow shoes. Returning to Koli Hotel and the museum next to it, we walked over to the ski slope. The highest peak within Koli, Ukko-Koli, is 347 meters high; the local ski slope starts a good way below that. This would explain why a lot of Finns came back from Munich with their skis... Afterwards, we rented a snow mobile, without guide or supervision, and drove from Loma-Koli over lake Pielinen towards Purnuniemi and in a large circle down towards lake Ryyn skyl where we turned around and went back the same way. If we thought Finnish streets don't have a lot of signs, we soon realized that snow mobile tracks have even fewer. There are at most two or three signs pointing you in the right direction, but on the plus side, there are no posted speed limits for snow mobiles, either. In somewhat related news, snow mobiles can go at least 95 km/h. At that point, the scratched and dirty visor of your rental helmet will keep slamming down, forcing you to take one hand off the handle and thus stop accelerating to maintain stability. To round off the day, we heated up the sauna built into our little wooden hut. Running outside three times to rub myself off with snow from head to toe, I almost slipped and fell while standing still. When your feet are too hot for the snowy ground, you'll start to melt your own little pools of slippery water/snow mush within seconds. File that one under "I would never have guessed unless I had experienced it myself".

Generic

The MarkDown source of this blog post is not even 5 kiB in size; even in a worst case scenario, pushing this to my ikiwiki instance via git will eat up less than 10 kiB of mobile data. Which is good, because I have 78 MiB of international data left on this plan. This is also the reason why there are no links in this blog post: I am writing everything offline and don't want to search for the correct URLs to link to. I really wish EU regulators would start to tackle data roaming now that SMS and voice calls are being forced down into somewhat sane pricing regions by regulations. PS:
-rw-r--r-- 1 richih richih 4.6K Feb 11 22:55 11-Finland-I.mdwn
[...]
Writing objects: 100% (7/7), 2.79 KiB, done.

15 February 2013

Francesca Ciceri: The DPL game

In his latest bits from the DPL, Stefano wrote:
I'd like to respond (also) here to inquiries I'm receiving these days: I will not run again as DPL. So you have about 20 days to mob\^Wconvince other DDs to run, or decide to run yourself. Do not wait for the very last minute, as that makes for lousy campaigns.
Ladies and gentlemen, I am pleased to present you... THE DPL GAME.

GOALS:
The goal of the game is to let people know you think they'd be nice DPLs. The point is not to pressure them, but to let them know they're awesome and make them at least consider the idea of running for DPL. The winners are those who have at least one of their Fantastic Four running for DPL. Bonus points if one of them ends up being the next DPL.

RULES:
Name three persons (plus a reserve, just in case) you'd like to see as candidates for DPL. Publicly list them (on your blog or on identi.ca using the hashtag #DPLgame) or at least let them know that you'd like to have them as candidates for DPL (via private mail). You may want to add a couple of lines explaining the rationale for your choices.

AGE:
0-99

NUMBER OF PLAYERS:
The more the merrier.

Some suggestions on how to play: first think of the qualities a DPL needs, in your opinion, to do a good job. Then look around you: the people you work with, the people you see interact on mailing lists, etc. There must be someone with those qualities.

Here are my Fantastic Four (in rigorous alphabetical order): In my opinion, they all more or less have enthusiasm, a general understanding of dynamics inside the project and of various technical sides of the project itself, the ability to delegate and coordinate with different people (inside and outside the project), good communication skills, and some diplomacy and ability in de-escalating conflicts. These are people I worked with or observed working and discussing on mailing lists, and I think they'd do a good job. But -hey!- we are almost a thousand developers, and you cannot possibly know everyone or observe all the people who work in the various teams. This is why you should pick your four names!

6 February 2013

Biella Coleman: Edward Tufte was a phreak

It has been so very long since I have left a trace here. I guess moving to two new countries (Canada and Quebec), starting a new job, working on Anonymous, and finishing my first book was a bit much. I miss this space, not so much because what I write here is any good, but it is a handy way for me to keep track of time and what I do and even think. My life feels like a blur at times, and hopefully here I can see its rhythms and changes a little more clearly if I occasionally jot things down. So I thought it would be nice to start with something that I found surprising: famed information designer Edward Tufte, a professor emeritus at Yale, was a phone phreak (and there is a stellar new book on the topic by former phreak Phil Lapsley). He spoke about his technological exploration during a sad event, a memorial service in NYC which I attended for the hacker and activist Aaron Swartz. I had my wonderful RA transcribe the speech, so here it is [we may not have the right spelling for some of the individuals so please let us know of any mistakes]:
Edward Tufte's Speech From Aaron Swartz's Memorial
Speech starts 41:00 [video cuts out in beginning]
We would then meet over the years for a long talk every now and then, and my responsibility was to provide him with a reading list, a reading list for life and then about two years ago Quinn had Aaron come to Connecticut and he told me about the four and a half million downloads of scholarly articles and my first question is, "Why isn't MIT celebrating this?"
[Video cuts out again]
Obviously helpful in my career there, he then became president of the Mellon foundation, he then retired from the Mellon foundation, but he was asked by the Mellon foundation to handle the problem of JSTOR and Aaron. So I wrote Bill Bullen (sp?) an email about it, I said first that Aaron was a treasure and then I told a personal story about how I had done some illegal hacking and been caught at it and what happened. In 1962, my housemate and I invented the first blue box, that's a device that allows for free, undetectable, unbillable long distance telephone calls. And we got this up and played around with it and the end of our research came when we concluded what was the longest long distance call ever made, which was from Palo Alto to New York time-of-day via Hawaii, well during our experimentation, AT&T, on the second day it turned out, had tapped our phone and uh but it wasn't until about 6 months later when I got a call from the gentleman, AJ Dodge, senior security person at AT&T and I said, "I know what you're calling about." and so we met and he said "You... what you are doing is a crime that would...", you know all that. But I knew it wasn't serious because he actually cared about the kind of engineering stuff and complained that the tone signals we were generating were not the standard because they record them and play them back in the network to see what numbers you were trying to reach, but they couldn't break through the noise of our signal. The upshot of it was that uh oh and he asked why we went off the air after about 3 months, because this was to make long distance telephone calls for free and I said this was because we regarded it as an engineering problem and we made the longest long distance call and so that was it. So the deal was, as I explained in my email to Bill Bullen, that we wouldn't try to sell this and we were told, I was told that crime significance would pay a great deal for this, we wouldn't do any more of it and that we would turn our equipment over to AT&T, and so they got a complete vacuum tube isolator kit for making long distance phone calls. But I was grateful to AJ Dodge and, I must say, AT&T, that they decided not to wreck my life. And so I told Bill Bullen that he had a great opportunity here, to not wreck somebody's life, 'course he thankfully did the right thing.
Aaron's unique quality was that he was marvelously and vigorously different. There is a scarcity of that. Perhaps we can all be a little more different too.
Thank you very much.

21 December 2012

Dirk Eddelbuettel: Rcpp 0.10.2

Release 0.10.2 of Rcpp provides the second update to the 0.10.* series, and has arrived on CRAN and in Debian. It brings another great set of enhancements and extensions, building on the recent 0.10.0 and 0.10.1 releases. The new Rcpp attributes were rewritten to not require Rcpp modules (as we encountered an issue with exceptions on Windows when built this way), code was reorganized to significantly accelerate compilation, and a couple of new things such as more Rcpp sugar goodies, a new timer class, and a new string class were added. See below for full details. We also tested this fairly rigorously by checking about two thirds of the over 90 CRAN packages depending on Rcpp (the remainder required even more package installs which we did not do, as this was already taking about 12 total cpu hours to test). We are quite confident that no changes are required (besides one in our own RcppClassic package which we will update). The complete NEWS entry for 0.10.2 is below; more details are in the ChangeLog file in the package and on the Rcpp Changelog page.
Changes in Rcpp version 0.10.2 (2012-12-21)
  • Changes in Rcpp API:
    • Source and header files were reorganized and consolidated so that compile times are now significantly lower
    • Added additional check in Rstreambuf deletion
    • Added support for clang++ when using libc++, and for icpc in std=c++11 mode, thanks to a patch by Yan Zhou
    • New class Rcpp::String to facilitate working with a single element of a character vector
    • New utility class sugar::IndexHash inspired by Simon Urbanek's fastmatch package
    • Implementation of the equality operator between two Rcomplex
    • RNGScope now has an internal counter that enables it to be safely used multiple times in the same stack frame.
    • New class Rcpp::Timer for benchmarking
  • Changes in Rcpp sugar:
    • More efficient version of match based on IndexHash
    • More efficient version of unique based on IndexHash
    • More efficient version of in based on IndexHash
    • More efficient version of duplicated based on IndexHash
    • More efficient version of self_match based on IndexHash
    • New function collapse that implements paste(., collapse= "" )
  • Changes in Rcpp attributes:
    • Use code generation rather than modules to implement sourceCpp and compileAttributes (eliminates problem with exceptions not being able to cross shared library boundaries on Windows)
    • Exported functions now automatically establish an RNGScope
    • Functions exported by sourceCpp now directly reference the external function pointer rather than rely on dynlib lookup
    • On Windows, Rtools is automatically added to the PATH during sourceCpp compilations
    • Diagnostics are printed to the console if sourceCpp fails and C++ development tools are not installed
    • A warning is printed when compileAttributes detects Rcpp::depends attributes in source files that are not matched by Depends/LinkingTo entries in the package DESCRIPTION
Thanks to CRANberries, you can also look at a diff to the previous release 0.10.1. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page.
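As a rough illustration of the new Rcpp::String class mentioned above, here is a small hypothetical sketch (the function name is made up, and the exact set of member functions should be checked against the Rcpp documentation); it treats each element of a character vector as a single, mutable string.

// shout.cpp -- hypothetical example of the Rcpp::String class added in 0.10.2
#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
CharacterVector shout(CharacterVector x) {
    for (int i = 0; i < x.size(); ++i) {
        String s = x[i];   // pull one element out as a String
        s += "!";          // append to it, much like a std::string
        x[i] = s;          // write the modified element back
    }
    return x;
}

After Rcpp::sourceCpp("shout.cpp"), calling shout(c("hello", "world")) from R would be expected to return "hello!" and "world!".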

Daniel Kahn Gillmor: libasound2-plugins is a resource hog!

I run mpd on debian on "igor", an NSLU2 -- a very low-power ~266MHz armel machine, with no FPU and a scanty 32MiB of RAM. This serves nicely to feed my stereo with music that is controllable from anywhere on my LAN. When playing music and talking to a single mpd client, the machine is about 50% idle. However, during a recent upgrade, something wanted to pull in pulseaudio, which in turn wanted to pull in libasound2-plugins, and i distractedly (foolishly) let it. With that package installed, after an mpd restart, the CPU was completely thrashed (100% utilization) and music only played in stutters of 1 second interrupted by a couple seconds of silence. igor was unusable for its intended purpose. Getting rid of pulseaudio was my first attempt to fix the stuttering, but the problem remained even after pulse was all gone and mpd was restarted. Then i did a little search of which packages had been freshly installed in the recent run:
grep ' install .* <none> ' /var/log/dpkg.log
and used that to pick out the offending package. After purging libasound2-plugins and restarting mpd, the igor is back in action. Lesson learned: on low-overhead machines, don't allow apt to install recommends!
echo 'APT::Install-Recommends "0";' >> /etc/apt/apt.conf
And it should go without saying, but sometimes i get sloppy: i need to pay closer attention during an "apt-get dist-upgrade".
Tags: alsa, apt, low-power, mpd

9 November 2012

Gunnar Wolf: Road trip to ECSL 2012 in Guatemala

Encuentro Centroamericano de Software Libre! Guatemala! During a national (for us) holiday, so it's easy to go without missing too much work time! How could I miss the opportunity? Several years ago, I started playing with the idea of having a road trip. Probably this was first prompted by the UK crew and the Three Intrepid Motorcycle Riders arriving by land to DebConf 9, I don't know. Fact is, I wanted to go to DebConf10 in New York by land, as well as to DebConf12 in Nicaragua. Mostly due to a lack of time, I didn't, although we did start making some longish trips. Of course, my desire to show Regina what Mexico is like also helped! So, up until a week ago, our (according to my standards) long distance driving experience included:
  • México - Guanajuato - Puerto Vallarta - Guanajuato - México, in early November 2011, for Festival de Software Libre and with Regina and our Blender friends Octavio and Claudia. Totalling almost 1900Km, mostly consisting of wide, toll highway.
  • México - Xilitla - San Luis Potosí - México, in April 2012, just for fun and for a nice short vacation, alone with Regina. Totalling almost 1200Km, but through Sierra Gorda de Querétaro, a very tough stretch of about 250Km which we did at about 50Km/h on average. Beautiful route for sure! We didn't originally intend to go through San Luis Potosí, and it does not appear to make much sense, as it adds ~350Km to the total, but it was even quicker than going back by the same route and, according to those who know, even faster than our planned route via Tamazunchale and Ixmiquilpan!
  • México - San Luis Potosí - Zacatecas - Aguascalientes - Guanajuato - México, in May 2012, for Congreso Internacional de Software Libre, again with Octavio and Claudia. Totalling 1250Km, and following very good roads, although most of them were toll-free.
But there is always a certain halo over crossing a border, maybe more so in countries as large as Mexico. We convinced Pooka and Moni and, granted with some apprehension, as we knew of some important security risks in the more rural areas we wanted to go through, we decided to go to Guatemala. And, although we wanted to go with a bit more time, Real Life took its toll: we could not take more time than the intersection of what our respective jobs offered. So, here goes a short(?) recap of our six day long, 3200Km trip. Of course, we have a map detailing this.

Mexico - Veracruz

I came to my office early on Wednesday (31-oct), and left with Regina around 10AM towards Veracruz. We agreed to meet there with Moni and Pooka, who would take the night bus, and continue together. Crossing Mexico City proved to be the longest obstacle; we arrived in Veracruz already past 3PM, and spent a nice evening walking down the center and port of the city. Veracruz port can still be seen as part of central Mexico; I knew the road quite well.

Veracruz - San Andrés Tuxtla - Catemaco - San Cristóbal de las Casas

We met with our friends at the iconic Gran Café de la Parroquia at 6:30AM. Had a nice breakfast with coffee, and by 7:30 we were heading south-west. The reason to have a road trip was to get to know the route, to enjoy the countryside. So, given we "only" had to make 650Km this day, we took the non-toll road: a narrow path stretching along the coastal plains of Veracruz, until Acayucan. Doing so, we also saved some money, as the equivalent toll road is around MX$300 (~US$25)! Veracruz is a hot state. We ended up all sweaty and tired by 19:00, when we reached San Cristóbal. We had agreed not to drive at night, due to security issues, but fortunately there was quite a bit of traffic both ways between Tuxtla Gutiérrez (Chiapas State capital, around 1hr from San Cristóbal, where darkness got us) and our destination, so we carried on. Now, San Cristóbal is a high city, almost as high as Mexico City (2100m), and being more humid, it was quite chilly. We went for a walk, and were convinced that at a later time, we had to stay for several days there. The city is beautiful, the region is breath-taking, there are a lot of great handicrafts as well, and it's overall very cheap. Really lovely place.

San Cristóbal de las Casas - Cd. Cuauhtémoc - La Mesilla - Guatemala

Once again, this day started early. We woke up ready to leave at 7AM, and not earlier because the hotel's parking didn't open earlier. After a very quick visit to San Cristóbal downtown, to take some photos that had not come out right the night before, we took the road to Comitán, stopping just for some tamales de bola y chipilín for breakfast. Central Chiapas is almost continuously populated, differing from most of my experience in Mexico. It is all humid, and has some very beautiful landscapes. We passed Comitán, which is a much larger city than what we expected, went downhill after La Trinitaria, crossed a plain, and continued until hills started taking over again. We stopped in a very chaotic, dirty place: just across the border, where Ciudad Cuauhtémoc becomes La Mesilla. This border was basically what we expected: there is no half-official place to exchange money, so we had to buy quetzales from somebody who offered them on the street, at MX$2 per Q1 (where the real exchange rate should be around 1.50 to 1). While on the road, I was half-looking for exchange posts in Comitán and onwards, and found none (and being a festive day, they would probably be closed anyway).
But we were expecting this, after all, and exchanged just the basic minimum: MX$600 (US$50, which by magic became Q300, US$40). The border procedure consists of:
  • Spraying the car against diseases (which has a cost of Q18)
  • Each of us has to go through migration. Note, in case you cross this border: We didn't expressly cross Mexican migration, so officially there was no record of us going out. Be sure to go through migration to avoid problems at re-entry!
    Migration has no cost.
  • Customs. As we were entering by car, I had to purchase a permit for circulation. I don't remember the exact quote, but it was around Q150, and the permit is valid for 90 days.
  • That's it! Welcome to Guatemala!
La Mesilla is in Guatemala's Huehuetenango Department, and of all the Departments we crossed on the way to Guatemala City (Huehuetenango, Quetzaltenango, Totonicapán, Sololá, Chimaltenango, Sacatepéquez and Guatemala), this is the largest one. Huehuetenango is home to the Cuchumatanes mountain ridge. We found beautiful, really steep, really fertile mountains. It is plainly amazing: mountains over 60°, and quite often in full agricultural use, even at their steepest points! The CA-1 highway is, in general, in very good shape. There are however many (many, many) speed bumps (topes, in Mexican terminology, or túmulos in Guatemalan), at least a couple at every village we crossed, not always painted. The road is narrow and quite winding; it follows river streams for most of the way. We feared it would be in much worse shape, from what we had heard, but during the whole way we found only three points where the road was unusable due to landslides, and an alternative road was always in place when we needed it. After Totonicapán, the narrow road becomes a wide (four lane) highway. Don't let that fool you! It still goes through the center of every village along the road, so it's really not meant for speeding. Also, even though the pavement is in very good condition, it is really steep quite often. It is not the easiest road to drive, but it's (again) by far not as bad as we expected. We arrived in Guatemala City as dusk was falling, and got promptly lost. Guatemala has a very strange organization scheme: the city is divided in several zones, laid out in a swirl-like fashion. East-west roads are called Calle and north-south roads are called Avenida (except for zona 4, I think, where they are diagonal, and some are Rutas while the others are Vías; I won't go into too much detail). Thing is, many people told us it's a foolproof design, and people from different countries understand the system perfectly. We didn't... At least not when we arrived. We got quite lost, and it took us around one hour to arrive at our hotel, at almost 19:00, almost 12 hours since we left San Cristóbal. Went for a quick dinner, and then waited for our friends to arrive after the first day of work of ECSL, which we missed completely. And, of course, we were quite tired, so we didn't stay up much longer.

Antigua Guatemala

On Saturday, ECSL's activities started after 14:00, so we almost-kidnapped Wences, the local organization lead, and took him to show us around Antigua Guatemala. Antigua was the capital of Guatemala until an earthquake destroyed it in the 1770s; the capital was moved to present-day Guatemala City, but Antigua was never completely abandoned. Today, it is a world heritage site, a beautiful city, where we could/should have stayed for several days. But we were there for the conference, so we were in Antigua just a couple of hours, and headed back to Guatemala. Word of caution: going from Guatemala to Antigua, we went down via the steepest road I have ever driven. Again, a real four-lane highway... but quite scary! The main focus for this post is to give some roadtrip advice to potential readers... So, this time around, I won't give much detail regarding ECSL. It was quite interesting, we had some very good discussions... but it would take me too much space to talk about it!

The road back: Guatemala - Tecún Umán; Cd. Hidalgo - Arriaga

So, about the road back: yes, we just spent three days getting to Guatemala City. We were there only for ~36 hours. And... We needed to be here by Tuesday morning no matter what.
So, Sunday at noon we said goodbye to our good friends at ECSL and started the long way back. To get to know more of Guatemala, we went back by the CA-2 highway, which goes via the coastal plains: not close to the Pacific ocean, which we didn't get to see at all, but not through the mountains either. To get to CA-2, we took CA-9 from Guatemala. If I am not mistaken, this is the only toll road in Guatemala (at least, the only one we used, and we used some pretty good highways!). It is not expensive; I don't remember right now, but it must have been around Q20 (US$3). Went south past Palín and until CA-2, just outside Escuintla city, and headed west. Throughout Escuintla and Suchitepéquez it is again a four lane highway; somewhere in Retalhuleu it becomes a two lane highway. We were strongly advised not to take this road at night because, as the population density is significantly lower than along CA-1, it can get lonely at times, and there are several reports of robberies. We did feel the place much less populated, but saw nothing suspicious in any way. Something important: there are almost no speedbumps on CA-2! The terrain stayed quite flat and easy as we crossed Quetzaltenango, and only in San Marcos did we find some interesting hills and a very strong rain that would intermittently accompany us for the rest of the ride. So, we finally arrived at the border city of Tecún Umán at around 16:30, approximately four hours after leaving the capital. The Tecún Umán - Cd. Hidalgo cities and border pass are completely different from the disorderly and dirty Cd. Cuauhtémoc - La Mesilla ones. The city of Tecún Umán could be just a nice town anywhere in the country; it does not feel aggressive as most border cities I have seen in our continent do. We stopped to eat at El pollo campero and headed to the border. On the Mexican side, we also saw a very well consolidated, big and ordered migration area. Migration officers were very kind and helpful. As we left Cd. Cuauhtémoc, Regina didn't get a stamp showing she had left Mexico, so technically she was illegally out of the country (as she is not a national... they didn't care about the rest of us). The procedure to fix this was easy, simple, straightforward. We only paid for the fumigation again (MX$60, US$5), and were allowed to leave. Anyway, we crossed the border. There is a ~30Km narrow road between Cd. Hidalgo and Tapachula, but starting in Tapachula we went on northwards via a very good, four lane and very straight highway. Even though we had agreed not to drive at night... well, we were quite hurried and still too far from Mexico City, so we decided to push it for three more hours, following the coastline until the city of Arriaga, almost at the border between Chiapas and Oaxaca. Found a little hotel to sleep some hours and collapsed. Word of warning: this road (from Tapachula to Arriaga) is also known for its robberies. We saw only one suspicious thing: two guys were pushing up their motorcycle, from which they had apparently fallen. We didn't stop, as they looked healthy and not much in need of help, but later on talked about it: even though this was at night, they were not moving as if they had just crashed; nothing was scratched, not the motorcycle and not their clothes. That might have been an attempt to mug us (or whoever stopped by). This highway is very lonely, and the two directions are separated by a wall of vegetation, so probably nobody would have seen us had we stopped for some minutes. Be aware if you use this road!
The trip comes to an end: Arriaga - Niltepec - Istmo - Córdoba - México

The next (last, finally!) day, we left at 6:30AM. After driving somewhat over one hour, we arrived at Niltepec, where a group of taxi drivers had the highway closed as a protest against their local government's tolerance of mototaxis. We evaluated going back to Arriaga and continuing via the Tuxtla Gutiérrez highway, but that would have been too long. We had a nice breakfast of tlayudas (which resulted in Pooka getting an allergic reaction shortly afterwards) and, talking with people here and there, were told about an alternative route along an agricultural road that goes around the blockade. So, we took this road the best way we could, and after probably 1hr of driving at 20Km/h, finally came back to the main road. We planned on crossing the isthmus using the Acayucan-Juchitán road. We were amazed at the La Ventosa ("the windy") area, where we crossed a huge wind farm generating electricity, so of course we got our good share of photos. From then onwards, not much more worth mentioning. Crossed the isthmus via a quite secondary road in not too good shape (although there is a lot of machinery, and the road will most likely improve in the next few months/years), then took the toll freeway along Veracruz until Córdoba. We stopped for a (delicious and reinvigorating!) cup of coffee at Hotel Zeballos, where Agustín de Iturbide signed with Viceroy Juan O'Donojú the treaties that granted Mexico its independence. Traveller, beware: when crossing between Puebla and Veracruz, there is a steep slope of almost 1000m where you will almost always (except if it's close to noon) find very thick fog; coming up the highway from Córdoba, this is in the region known as Cumbres de Maltrata. We had the usual fog, and just as we left it, a thin but constant rain that stayed with us up until we got home. Crossed Puebla state with no further eventualities, and arrived at Pooka and Moni's house by 22:00. Less than one hour later, Regina and I arrived home as well. This was four days ago... and I have finally finished writing it all down ;-) Hope you find this useful, or if not, at least entertaining! If you read this post on my blog, you will find many pictures taken along the trip below (well, if you are reading the right page, not the general blog index...). If you are reading from a planet or other syndication service... well, come to the blog!

Dreamhost woes

Oh, and... yes, it sometimes happens: my blog is hosted at Dreamhost. This means that usually it works correctly... but sometimes, especially when many people request many nontrivial pages, it just gives an error. If you get an error, reload once or twice... or until your patience runs out ;-)

11 August 2012

Russ Allbery: Review: Design Patterns

Review: Design Patterns, by Erich Gamma, et al.
Author: Erich Gamma
Author: Richard Helm
Author: Ralph Johnson
Author: John Vlissides
Publisher: Addison-Wesley
Copyright: 1995
Printing: September 1999
ISBN: 0-201-63361-2
Format: Hardcover
Pages: 374
Design Patterns: Elements of Reusable Object-Oriented Software by the so-called "Gang of Four" (Gamma, Helm, Johnson, and Vlissides) is one of the best-known books ever written about software design, and one of the most widely cited. The language introduced here, including the names of specific design patterns, is still in widespread use in the software field, particularly with object-oriented languages. I've had a copy for years, on the grounds that it's one of those books one should have a copy of, but only recently got around to reading it. The goal of this book is to identify patterns of design that are widely used, and widely useful, for designing object-oriented software. It's specific to the object-oriented model; while some of the patterns could be repurposed for writing OO-style programs in non-OO languages, they are about inheritance, encapsulation, and data hiding and make deep use of the facilities of object-oriented design. The patterns are very general, aiming for a description that's more general than any specific domain. They're also high-level, describing techniques and methods for constructing a software system, not algorithms. You couldn't encapsulate the ideas here in a library and just use them; they're ideas about program structure that could be applied to any program with the relevant problem. With the benefit of seventeen years of hindsight, I think the primary impact of this book has been on communication within the field. The ideas in here are not new to this book. Every pattern in Design Patterns was already in use in the industry before it was published; the goal was taxonomy, not innovation. One would not come to Design Patterns to learn how to program, although most introductory texts on object-oriented programming now borrow much of the pattern terminology. Rather, Design Patterns is as influential as it is because it introduced a shared terminology and a rigor around that terminology, allowing writers and programmers to put a name to specific program structures and thus talk about them more clearly. This also allows one to take a step back and see a particular structure in multiple programs, compare and contrast how it's used, and draw some general conclusions about where it would be useful. I have the feeling that the authors originally hoped the book would serve as a toolbox, but I think it's instead become more of a dictionary. The pattern names standardized here are widely used even by people who have never read this book, but I doubt many people regularly turn to this book for ideas for how to structure programs. Design Patterns is divided into two parts: a general introduction to and definition of a software pattern followed by a case study, and then a catalog of patterns. The catalog is divided into creational patterns (patterns for creating objects), structural patterns (patterns for composing objects into larger structures), and behavioral patterns (patterns for interactions between objects). Each pattern in turn follows a very rigid presentation structure consisting of the pattern name and high-level classification, its basic intent, other common names, a scenario that motivates the pattern, comments on the applicability of the pattern, the structure and classes or objects that make up the pattern, how those participants collaborate, how the pattern achieves its goals, comments on implementation issues, sample code, known uses of the pattern in real-world software, and related patterns.
As with a dictionary, the authors go to great lengths to keep the structure, terminology, and graphical representations uniform throughout, and the cross-referencing is comprehensive (to the point of mild redundancy). As for the patterns themselves, their success, both as terminology and as useful design elements, varies. Some have become part of the core lexicon of object-oriented programming (Factory Method, Builder, Singleton), sometimes to the point of becoming syntactic elements in modern OO languages (Iterator). These are terms that working programmers use daily. Others aren't quite as widespread, but are immediately recognizable as part of the core toolkit of object-oriented programming (Adapter, Decorator, Proxy, Observer, Strategy, Template Method). In some cases, the technique remains widespread, but the name hasn't caught on (Command, for example, which will be immediately familiar but which I rarely hear called by that name outside of specific uses inside UI toolkits due to ambiguity of terms). Other patterns are abstract enough that it felt like a bit of a reach to assign a name to them (Bridge, Composite, Facade), and I don't think use of those names is common, but the entries are still useful for definitional clarity and for comparing similar approaches with different implications. Only one pattern (Interpreter) struck me as insufficiently generic to warrant recording in a catalog of this type. So far, so good, but the obvious question arises: if you've not already read this book, should you read it? I think the answer to that is debatable. The largest problem with Design Patterns is that it's old. It came late enough in the development of object-oriented programming that it does capture much of the foundation, but OO design has continued to change and grow, and some patterns have either been developed subsequently or have become far more important. For example, Model-View-Controller is conspicuous by its absence, mentioned only in passing in the discussion of the Observer pattern. Any pattern catalog written today would have an extensive discussion. Similarly absent are Inversion of Control and Data Access Object, which are much more central to the day-to-day world of the modern programmer than, say, Memento or Visitor. One could easily go on: Lazy Initialization, Mock Object, Null Object... everyone will have their own list. A related problem is that all the implementation examples are shown in either C++ or Smalltalk (occasionally both). Those were probably the best languages to use at the time, but it's doubtful a modern rewrite would choose them. Smalltalk, in particular, I found nearly incomprehensible for the uninitiated, to the point where I ignored the code and only read the surrounding English description. C++ fares better, but Design Patterns occasionally drifts off into tedious discussions of how to work around C++'s limitations in ways that are irrelevant to the pattern itself and would not be necessary in, say, Java or Python. (This is ameliorated by the fact that C++, unlike Smalltalk, is still in widespread use, so those discussions remain moderately helpful for some readers.) Design Patterns is not, therefore, a very good source for a working knowledge of the most common patterns in use today. It has also become somewhat obsolete via its own success: the concept of a design pattern has become so popular that nearly all introductory texts include at least a basic discussion of design patterns and an introduction to the most notable and useful patterns. 
I think that's a more comfortable and more efficient way to pick up the basics than reading through this book, which is somewhat dense and which expects from the reader a reasonably good working knowledge of object-oriented programming. And, once you have the basics, MVC, DAO, and similar design patterns are probably more important than the more subtle design patterns presented here. That said, I think the rigor of description and the comparisons and discussions here still have some value. Design Patterns encourages the reader to look at patterns from a higher-level perspective, to think about meta-patterns such as the balance between data hiding and access, or between structure designed for the present purpose and structure that's adaptable to future needs. It's also mildly interesting from a historical standpoint; one can see the inspiration for future language designers in how problems are described here, and see how many of the implementation issues and negative consequences have been corrected or simplified by richer language designs. Overall, I would hesitate to recommend buying this book today, particularly at new textbook prices. But if you're a working object-oriented software designer or programmer, I think it's worth checking out from a library (and, thanks to its influence, any library with a decent software design section will almost certainly have a copy). Read the overall discussion, skim the catalog, and read the discussion of the patterns that strike your interest. It may help provide some additional structure and perspective to how you think about OO design. Rating: 6 out of 10

26 June 2012

Russ Allbery: Review: Quiet

Review: Quiet, by Susan Cain
Publisher: Crown
Copyright: 2012
ISBN: 0-307-45220-4
Format: Kindle
Pages: 263
I've always been an introvert. This is something that surprises some people when they first meet me since they equate introversion with shyness, and I'm not at all shy. It surprises others because I'm quite vocal and outspoken in meetings, but introversion also doesn't imply a lack of self-confidence. I can run meetings, give presentations, and argue my perspective in front of just about anyone, but I don't like parties, I crave time by myself, and I could happily go for weeks without seeing another human being. I'm an introvert because I find people draining rather than invigorating, written communication far easier and more comfortable than spoken, and superficial social contact more irritating and frustrating than enjoyable. If you think that means there may be something wrong with me, or that I would be happier if "drawn out of my shell," I wish you would read this book. But I suspect its core audience will be people like me: those who are tired of being pushed to conform with extrovert beliefs about social interaction, those who are deeply disgusted by the word "antisocial" or feel pangs of irrational guilt when hearing it, or those who just want to read an examination of interpersonal interactions that, for once, is written by and about people who like quiet and solitude just like they do. I first encountered Susan Cain via her TED talk, which I think is both the best possible summary of this book and the best advertisement for it. If you've not already seen it, watch it; it's one of the best TED talks I've seen, good enough that I've watched it three times. If you then want more of the same, buy Quiet. Quiet has, I think, three messages. First, it's a tour of the science: what is introversion and extroversion? Is there evidence that these are real physiological differences? (Spoiler: yes.) What do we know about introversion? How do we know those things? What experiments have been done and what methods have been used? Here, it's a good general introduction, although Cain is careful to point out that it only scratches the surface and there's much more scientific depth. For example, she touches on the connections between introversion and sensitivity to stimulus and points out that they're two separate, if related, categorizations, but doesn't have the space here to clarify the distinctions and tease them apart. But she lays a reasonable foundation, particularly in defense of introversion as a natural, physiologically grounded, scientifically analyzable, common, and healthy way of interacting with the world. (For those who are curious about the distinctions between introversion and sensitivity, and the argument that most of what Cain says here about introversion is actually about sensitivity, see the blog post by Elaine Aron.) The second message, the one that resonated with me the most, was Cain's passionate defense of introversion. Business culture (at least in the United States, which is what both Cain and I know) is strongly biased towards extroversion; at least faking extroversion seems to be required for some career advancement. Extrovert culture dominates politics and most public discourse. It's common to find people who consider introversion, particularly in children, to be a sign of unhappiness, poor social adjustment, psychological problems, or other issues that should be "cured" or changed. Cain's gentle but firm passion in defense of introversion is a breath of fresh air.
She attacks open plan offices, the current obsession with group learning and social school settings, and the modern group-think bias towards collaboration over solitude and concentration, and she does that with a combination of polite frustration and the conclusions of multiple studies. Introverts will be cheering as she constructs solid arguments and musters evidence against things that we've always found miserable and then been told we were wrong, short-sighted, or socially inept for finding miserable. I am so utterly on her side in this argument that I have no way of knowing how persuasive it will be, but it's lovely just to hear someone put into words what I feel. This defense does skew the book. Quiet is not, and does not purport to be, an even-handed presentation of introversion and extroversion. It's written proudly and unabashedly from the introvert's point of view. I'm fine with that: I, like Cain, think the US is saturated in extrovert perspectives and extrovert advice, particularly in the business world, and could use some balancing by activism from the other perspective. But be aware that this is not the book to look to for an objective study of all angles of the introvert/extrovert dichotomy, and I'm not sure her descriptions of extroversion are entirely fair or analogous to those of introversion. The extroversion described here seems somewhat extreme to me. I'm dubious how many extroverts would recognize themselves in it, which partly undermines the argument. The third message of the book, once Cain has won the introvert's heart, is some advice on how to be a proud introvert, to make the space and find the quiet that one desires, and to balance that against places where one may want and need to act like an extrovert for a while. Cain thankfully does not try to make this too much of the book, nor does she hold up any particular approach as The Answer. All the answers are going to be individual. But she does offer some food for thought, particularly around how to be conscious of and make deliberate choices about one's energy expenditures and one's recharge space. She also captures beautifully something that I've not seen explained this well before: the relief that an introvert can feel in the company of an extrovert who helps navigate social situations, make conversation, and keep discussions going until they can reach the depth and comfort level where the introvert can engage. I wish anyone in a position of authority over social situations would read this book, or at least watch the TED talk and be aware of the issues. Particularly managers, since (at least in my relatively limited experience) workplace culture is so far skewed towards extroversion that it can be toxic to introverts. Many of the techniques used by extrovert managers, and the goals and advice they give their employees, are simply wrong for introverts, and even damaging. Cain speaks well to the difficulties of empathy between people with very different interaction preferences, such as the problems with extroverts trying to "draw out" introverts who have hit social overload (or sensitive people who have hit stimulus overload). She also discusses something that I'd previously not thought about, namely how the pressure towards extroversion leads people to act extroverted even when naturally introverted, and how it's therefore very difficult to tell from behavior (and sometimes even to tell internally!) what one's natural interaction style is.
But mostly I recommend this book if you're an introvert, if the TED talk linked above speaks to you. Even if we can't convince the world to respect introversion more, or at least stop treating it as abnormal, it's a lovely feeling to read a book from someone who gets it. Who understands. Who fills a book with great stories about introverts and how they construct their worlds and create quiet space in which to be themselves. Rating: 9 out of 10

7 May 2012

Lars Wirzenius: Quality of discussion in free software development

The Online Photographer has a meta-article on some discussion in the photography world. Summary: someone wrote an opinion piece on one site, and people on the discussion forum of another site got his name wrong, possibly repeatedly. And the quality of the discussion went down from there. The quality of the discourse of free software development is frequently of some concern. Debian has a reputation as being a host to, er, particularly vigorous discussions. That reputation is not unwarranted, but, I think, we've improved a lot since 2005. The problem is hardly restricted to Debian, however. How can we improve this? I don't know. As a community, I'm not even sure we agree what the problems are. Here's my list. Insults, personal attacks, and other such outrageously bad behavior is uncommon. It crosses the line so clearly it becomes easy to deal with; I don't think handling this needs much attention. What can we do about this? I'm not sure. I have, for the time being, abandoned Debian mailing lists as a way to influence what goes on in the project, but that's just a way for me to clear some space in my head and time in my day to actually do things. My pet hypothetical solution of the day is that mailing lists might raise the quality of the debates by limiting the number of messages written by each person per day in each thread. This might, I think, induce people to write with more thought and put more effort into making each message count.

9 February 2012

Matthew Garrett: Is GPL usage really declining?

Matthew Aslett wrote about how the proportion of projects released under GPL-like licenses appears to be declining, at least as far as various sets of figures go. But what does that actually mean? In absolute terms, GPL use has increased - any change isn't down to GPL projects transitioning over to liberal licenses. But an increasing number of new projects are being released under liberal licenses. Why is that?

The figures from Black Duck aren't a great help here, because they tell us very little about the software they're looking at. FLOSSmole is rather more interesting. I pulled the license figures from a few sites and found the following proportion of GPLed projects:

RubyForge: ~30%
Google Code: ~50%
Launchpad: ~70%

I've left the numbers rough because there are various uncertainties - should proprietary licenses be included in the numbers, is CC Sharealike enough like the GPL to count it there, that kind of thing. But what's clear is that these three sites have massively different levels of GPL use, and it's not hard to imagine why. They all attract different types of developer. The RubyForge figures are obviously going to be heavily influenced by Ruby developers, and that (handwavily) implies more of a bias towards web developers than the general developer population. Launchpad, on the other hand, is going to have a closer association with people with an Ubuntu background - it's probably more representative of Linux developers. Google Code? The 50% figure is the closest to the 56.8% figure that Black Duck give, so it's probably representative of the more general development community.

The impression gained from this is that the probability of you using one of the GPL licenses is influenced by the community that you're part of. And it's not a huge leap to believe that an increasing number of developers are targeting the web, and the web development community has never been especially attached to the GPL. It's not hard to see why - the benefits of the GPL vanish pretty much entirely when you're never actually obliged to distribute the code, and while Affero attempts to compensate for that it also constrains your UI and deployment model. No matter how strong a believer in Copyleft you are, the web makes it difficult for users to take any advantage of the freedoms you'd want to offer. It's just as easy not to bother.
So it's pretty unsurprising that an increase in web development would be associated with a decrease in the proportion of projects licensed under the GPL.

This obviously isn't a rigorous analysis. I have very little hard evidence to back up my assumptions. But nor does anyone who claims that the change is because the FSF alienated the community during GPLv3 development. I'd be fascinated to see someone spend some time comparing project type with license use and trying to come up with a more convincing argument.

(Raw data from FLOSSmole: Howison, J., Conklin, M., & Crowston, K. (2006). FLOSSmole: A collaborative repository for FLOSS research data and analyses. International Journal of Information Technology and Web Engineering, 1(3), 17 26.)


3 January 2012

Matthew Garrett: TVs are all awful

A discussion a couple of days ago about DPI detection (which is best summarised by this and this and I am not having this discussion again) made me remember a chain of other awful things about consumer displays and EDID and there not being enough gin in the world, and reading various bits of the internet and wikipedia seemed to indicate that almost everybody who's written about this has issues with either (a) technology or (b) English, so I might as well write something.

The first problem is unique (I hope) to 720p LCD TVs. 720p is an HD broadcast standard that's defined as having a resolution of 1280x720. A 720p TV is able to display that image without any downscaling. So, naively, you'd expect them to have 1280x720 displays. Now obviously I wouldn't bother mentioning this unless there was some kind of hilarious insanity involved, so you'll be entirely unsurprised when I tell you that most actually have 1366x768 displays. So your 720p content has to be upscaled to fill the screen anyway, but given that you'd have to do the same for displaying 720p content on a 1920x1080 device this isn't the worst thing ever in the world. No, it's more subtle than that.

EDID is a standard for a blob of data that allows a display device to express its capabilities to a video source in order to ensure that an appropriate mode is negotiated. It allows resolutions to be expressed in a bunch of ways - you can set a bunch of bits to indicate which standard modes you support (1366x768 is not one of these standard modes), you can express the standard timing resolution (the horizontal resolution divided by 8, followed by an aspect ratio) and you can express a detailed timing block (a full description of a supported resolution).

1366/8 = 170.75. Hm.

Ok, so 1366x768 can't be expressed in the standard timing resolution block. The closest you can provide for the horizontal resolution is either 1360 or 1368. You also can't supply a vertical resolution - all you can do is say that it's a 16:9 mode. For 1360, that ends up being 765. For 1368, that ends up being 769.
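Here's that arithmetic as a quick Python sketch - purely illustrative, modelling only the "horizontal resolution divided by 8, plus an aspect ratio" scheme described above rather than the full byte-level encoding:

    # 1366 isn't divisible by 8, so the nearest encodable widths are 1360 and
    # 1368; with a 16:9 aspect ratio those imply 765 and 769 vertical lines.
    def nearest_standard_timings(width, aspect=(16, 9)):
        low = (width // 8) * 8
        return [(w, w * aspect[1] // aspect[0]) for w in (low, low + 8)]

    print(1366 / 8)                        # 170.75 -> not representable
    print(nearest_standard_timings(1366))  # [(1360, 765), (1368, 769)]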

It's ok, though, because you can just put this in the detailed timing block, except it turns out that basically no TVs do, probably because the people making them are the ones who've taken all the gin.

So what we end up with is a bunch of hardware that people assume is 1280x720, but is actually 1366x768, except they're telling your computer that they're either 1360x765 or 1368x769. And you're probably running an OS that's doing sub-pixel anti-aliasing, which requires that the hardware be able to address the pixels directly which is obviously difficult if you think the screen is one size and actually it's another. Thankfully Linux takes care of you here, and this code makes everything ok. Phew, eh?
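(The fix amounts to a quirk along these lines - a sketch of the idea rather than the actual kernel code, covering only the two near-miss modes mentioned above:)

    # Sketch: if the display advertises one of the bogus near-miss modes,
    # assume it's really the ubiquitous 1366x768 panel.
    BOGUS_1366_MODES = {(1360, 765), (1368, 769)}

    def fixup_mode(width, height):
        if (width, height) in BOGUS_1366_MODES:
            return (1366, 768)
        return (width, height)

    assert fixup_mode(1368, 769) == (1366, 768)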

But ha ha, no, it's worse than that. And the rest applies to 1080p ones as well.

Back in the old days when TV signals were analogue and got turned into a picture by a bunch of magnets waving a beam of electrons about all over the place, it was impossible to guarantee that all TV sets were adjusted correctly and so you couldn't assume that the edges of a picture would actually be visible to the viewer. In order to put text on screen without risking bits of it being lost, you had to steer clear of the edges. Over time this became roughly standardised and the areas of the signal that weren't expected to be displayed were called overscan. Now, of course, we're in a mostly digital world and such things can be ignored, except that when digital TVs first appeared they were mostly used to watch analogue signals so still needed to overscan because otherwise you'd have the titles floating weirdly in the middle of the screen rather than towards the edges, and so because it's never possible to kill technology that's escaped into the wild we're stuck with it.

tl;dr - Your 1920x1080 TV takes a 1920x1080 signal, chops the edges off it and then stretches the rest to fit the screen because of decisions made in the 1930s.

So you plug your computer into a TV and even though you know what the resolution really is you still don't get to address the individual pixels. Even worse, the edges of your screen are missing.

The best thing about overscan is that it's not rigorously standardised - different broadcast bodies have different recommendations, but you're then still at the mercy of what your TV vendor decided to implement. So what usually happens is that graphics vendors have some way in their drivers to compensate for overscan, which involves you manually setting the degree of overscan that your TV provides. This works very simply - you take your 1920x1080 framebuffer and draw different sized black borders until the edge of your desktop lines up with the edge of your TV. The best bit about this is that while you're still scanning out a 1920x1080 mode, your desktop has now shrunk to something more like 1728x972 and your TV is then scaling it back up to 1920x1080. Once again, you lose.
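The arithmetic behind that shrinkage, as a rough sketch - the 5% per edge is an assumption that happens to reproduce the 1728x972 figure, and real TVs vary:

    # Usable desktop inside a scanned-out mode once you've drawn black borders
    # to compensate for the TV's overscan.
    def usable_area(width, height, overscan_pct_per_edge=5):
        factor = 1 - 2 * overscan_pct_per_edge / 100
        return int(round(width * factor)), int(round(height * factor))

    print(usable_area(1920, 1080))  # (1728, 972)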

The HDMI spec actually defines an extension block for EDID that indicates whether the display will overscan or not, but doesn't provide any way to work out how much it'll overscan. We haven't seen many of those in the wild. It's also possible to send an HDMI information frame that indicates whether or not the video source expects to be overscanned, but (a) we don't do that and (b) it'll probably be ignored even if we did, because who ever tests this stuff. The HDMI spec also says that the default behaviour for 1920x1080 (but not 1366x768) should be to assume overscan. Charming.

The best thing about all of this is that the same TV will often behave differently depending on whether you connect via DVI or HDMI - DVI input typically isn't overscanned, but some TVs will still overscan it. Some TVs have options in the menu to disable overscan and others don't. Some monitors will overscan if you feed them an HD resolution over HDMI, so if you have HD content and don't want to lose the edges then your hardware needs to scale it down and let the display scale it back up again. It's all awful. I recommend you drink until everything's already blurry and then none of this will matter.


28 November 2011

Dirk Eddelbuettel: A Story of Life and Death. On CRAN. With Packages.

The Comprehensive R Archive Network, or CRAN for short, has been a major driver in the success and rapid proliferation of the R statistical language and environment. CRAN currently hosts around 3400 packages, and is growing at a rapid rate. Not too long ago, John Fox gave a keynote lecture at the annual R conference and provided a lot of quantitative insight into R and CRAN---including an estimate of an incredible growth rate of 40% as a near-perfect straight line on a log-log chart! So CRAN does in fact grow exponentially. (His talk morphed into this paper in the R Journal, see figure 3 for this chart.) The success of CRAN is due to a lot of hard work by the CRAN maintainers, led for many years and still today by Kurt Hornik, whose dedication is unparalleled. Even at the current growth rate of several packages a day, all submissions are still rigorously quality-controlled using strong testing features available in the R system.

And for all its successes, and without trying to sound ungrateful, there have always been some things missing at CRAN. It has always been difficult to keep a handle on the rapidly growing archive. Task Views for particular fields, edited by volunteers with specific domain knowledge (including yours truly), help somewhat, but still cannot keep up with the flow. What is missing are regular updates on packages. What is also missing is a better review and voting system (and while Hadley Wickham mentored a Google Summer of Code student to write CRANtastic, it seems fair to say that this subproject didn't exactly take off either).

Following useR! 2007 in Ames, I decided to do something and noodled over a first design on the drive back to Chicago. A weekend of hacking led to CRANberries. CRANberries uses existing R functions to learn which packages are available right now, and compares that to data stored in a local SQLite database. This is enough to learn two things: First, which new packages were added since the last run. That is very useful information, and it feeds a website with blog subscriptions (for the technically minded: an RSS feed, at this URL). Second, it can also compare current version numbers with the most recent stored version number, and thereby learns about updated packages. This too is useful, and also feeds a website and RSS stream (at this URL; there is also a combined one for new and updated packages). CRANberries writes out little summaries for both new packages (essentially copying what the DESCRIPTION file contains) and a quick diffstat summary for updated packages. A static blog compiler munges this into static html pages which I serve from here, and creates the RSS feed data at the same time.

All this has been operating since 2007. Google Reader tells me the RSS feed averages around 137 posts per week, and has about 160 subscribers. It does feed to Planet R which itself redistributes, so it is hard to estimate the absolute number of readers. My weblogs also indicate a steady number of visits to the html versions. The most recent innovation was to add tweeting earlier in 2011 under the @CRANberriesFeed Twitter handle. After all, the best way to address information overload and too many posts in our RSS readers surely is to ... just generate more information and add some Twitter noise. So CRANberries now tweets a message for each new package, and a summary message for each set of new packages (or several if the total length exceeds the 140 character limit). As of today, we have sent 1723 tweets to what are currently 171 subscribers.
Tweets for updated packages were added a few months later. Which leads us to today's innovation. One feature which has truly been missing from CRAN was updates about withdrawn packages. Packages can be withdrawn for a number of reasons. Back in the day, CRAN carried so-called bundles carrying packages inside. Examples were VR and gregmisc. Both had long been split into their component packages, making VR and gregmisc part of the set of packages no longer on the top page of CRAN, but only in its archive section. Other examples are packages such as Design, which its author Frank Harrell renamed to rms to match the title of the book covering its methodology. And then there are of course packages for which the maintainer disappeared, or lost interest, or was unable to keep up with quality requirements imposed by CRAN. All these packages are of course still in the Archive section of CRAN.

But how many packages did disappear? Well, compared to the information accumulated by CRANberries over the years, as of today a staggering 282 packages have been withdrawn for various reasons. And at least I would like to know more regularly when this happens, if only so I have a chance to see if the retired package is one of the 120+ packages I still look after for Debian (as happened recently with two Rmetrics packages). So starting with the next scheduled run, CRANberries will also report removed packages, in its own subtree of the website and its own RSS feed (which should appear at this URL). I made the required code changes (all of about two dozen lines), and did some light testing. To not overwhelm us all with line noise while we catch up to the current steady state of packages, I have (temporarily) lowered the frequency with which CRANberries is called by cron. I also put a cap on the number of removed packages that are reported in each run. As always with new code, there may be a bug or two, but I will try to catch up in due course.

I hope this is of interest and use to others. If so, please use the RSS feeds in your RSS readers, and subscribe to the @CRANberriesFeed. And keep using CRAN, and let's all say thanks to Kurt, Stefan, Uwe, and everybody who is working on CRAN (or has been in the past).
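For the curious, the bookkeeping described above is simple enough to sketch. This is not the actual CRANberries code (which drives everything from R); it's a hypothetical Python rendition of the same idea, with an invented SQLite schema:

    # Sketch: diff the packages currently on CRAN against a local SQLite
    # snapshot to find new, updated and removed packages, then store the new
    # state for the next run. Table name and inputs are hypothetical.
    import sqlite3

    def diff_against_snapshot(db_path, current):
        """current: dict mapping package name -> version string."""
        con = sqlite3.connect(db_path)
        con.execute("CREATE TABLE IF NOT EXISTS packages "
                    "(name TEXT PRIMARY KEY, version TEXT)")
        stored = dict(con.execute("SELECT name, version FROM packages"))

        new = {p: v for p, v in current.items() if p not in stored}
        updated = {p: v for p, v in current.items()
                   if p in stored and stored[p] != v}
        removed = {p: v for p, v in stored.items() if p not in current}

        # Record the new state for the next run.
        con.execute("DELETE FROM packages")
        con.executemany("INSERT INTO packages VALUES (?, ?)", current.items())
        con.commit()
        con.close()
        return new, updated, removed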

17 November 2011

Raphaël Hertzog: People Behind Debian: Mark Shuttleworth, Ubuntu's founder

I probably don't have to introduce Mark Shuttleworth: he was already a Debian developer when he became a millionaire after having sold Thawte to Verisign in 1999. Then in 2002 he became the first African (and first Debian developer) in space. Two years later, he found another grandiose project to pursue: bring the Microsoft monopoly to an end with a new alternative operating system named Ubuntu (see bug #1). I met Mark during DebConf 6 in Oaxtepec (Mexico), where we were both trying to find ways to enhance the collaboration between Debian and Ubuntu. The least I can say is that Mark is opinionated, but any leader usually is, and in particular the self-appointed ones! :-) Read on to discover his view on the Ubuntu-Debian relationship and much more.

Raphael: Who are you?

Mark: At heart I'm an explorer, inventor and strategist. Change in technology, society and business is what fascinates me, and I devote almost all of my time and wealth to the catalysis of change in a direction that I hope improves society and the environment. I'm 38, studied information systems and finance at the University of Cape Town. My heart's home is Cape Town, and I've lived there and in Star City and in London; now I live in the Isle of Man with my girlfriend Claire and 14 precocious ducks. I joined Debian in around 1995 because I was helping to set up web servers for as many groups as possible, and I thought Debian's approach to packaging was very sensible but there was no package for Apache. In those days, the NM process was a little easier ;-)

Raphael: What was your initial motivation when you decided to create Ubuntu 7 years ago?

Mark: Ubuntu is designed to fulfill a dream of change; a belief that the potential of free software was to have a profound impact on the economics of software as well as its technology. It's obvious that the technology world is enormously influenced by Linux, GNU and the free software ecosystem, but the economics of software are still essentially unchanged. Before Ubuntu, we have a two-tier world of Linux: there's the community world (Debian, Fedora, Arch, Gentoo) where you support yourself, and the restricted, commercial world of RHEL and SLES/SLED. While the community distributions are wonderful in many regards, they don't and can't meet the needs of the whole of society; one can't find them pre-installed, one can't get certified and build a career around them, one can't expect a school to deploy at scale a platform which is not blessed by a wide range of institutions. And the community distributions cannot create the institutions that would fix that. Ubuntu brings those two worlds together, into one whole, with a commercial-grade release (inheriting the goodness of Debian) that is freely available but also backed by an institution. The key to that dream is economics, and as always, a change in economics; it was clear to me that the flow of money around personal software would change from licensing ("buying Windows") to services ("paying for your Ubuntu One storage"). If that change was coming, then there might be room for a truly free, free software distribution, with an institution that could make all the commitments needed to match the commercial Linux world. And that would be the achievement of a lifetime. So I decided to dedicate a chunk of my lifetime to the attempt, and found a number of wonderful people who shared that vision to help with the attempt.
It made sense to me to include Debian in that vision; I knew it well as both a user and insider, and believed that it would always be the most rigorous of the community distributions. I share Debian's values and those values are compatible with those we set for Ubuntu.
Debian on its own, as an institution, could not be a partner for industry or enterprise. The bits are brilliant, but the design of an institution for independence implies making it difficult to be a decisive counterparty, or a contractual provider. It would be essentially impossible to achieve the goals of pre-installation, certification and support for third-party hardware and software inside an institution that is designed for neutrality, impartiality and independence. However, two complementary institutions could cover both sides of this coin. So Ubuntu is the second half of a complete Debian-Ubuntu ecosystem. Debian's strengths complement Ubuntu's; Ubuntu can achieve things that Debian cannot (not because its members are not capable, but because the institution has chosen other priorities) and conversely, Debian delivers things which Ubuntu cannot, not because its members are not capable, but because it chooses other priorities as an institution. Many people are starting to understand this: Ubuntu is Debian's arrow, Debian is Ubuntu's bow. Neither instrument is particularly useful on its own, except in a museum of anthropology ;)
So the worst and most frustrating attitude comes from those who think Debian and Ubuntu compete. If you care about Debian, and want it to compete on every level with Ubuntu, you are going to be rather miserable; you will want Debian to lose some of its best qualities and change some of its most important practices. However, if you see the Ubuntu-Debian ecosystem as a coherent whole, you will celebrate the strengths and accomplishments of both, and more importantly, work to make Debian a better Debian and Ubuntu a better Ubuntu, as opposed to wishing Ubuntu was more like Debian and vice versa.

Raphael: The Ubuntu-Debian relationship was rather hectic at the start; it took several years to mature. If you had to start over, would you do some things differently?

Mark: Yes, there are lessons learned, but none of them are fundamental. Some of the tension was based on human factors that cannot really be altered: some of the harshest DD critics of Canonical and Ubuntu are folk who applied for but were not selected for positions at Canonical. I can't change that, and wouldn't change that, and would understand the consequences are, emotionally, what they are. Nevertheless, it would have been good to be wiser about the way people would react to some approaches. We famously went to DebConf 5 in Porto Alegre and hacked in a room at the conference. It had an open door, and many people popped a head in, but I think the not-a-cabal collection of people in there was intimidating and the story became one of exclusion. If we'd wanted to be exclusive, we would have gone somewhere else! So I would have worked harder to make that clear at the time if I'd known how many times that story would be used to paint Canonical in a bad light. As for engagement with Debian, I think the situation is one of highs and lows. As a high, it is generally possible to collaborate with any given maintainer in Debian on a problem in which there is mutual interest. There are exceptions, but those exceptions are as problematic within Debian as between Debian and outsiders. As a low, it is impossible to collaborate with Debian as an institution, because of the design of the institution.
In order to collaborate, two parties must make and keep commitments. So while one Debian developer and one Ubuntu developer can make personal commitments to each other, Debian cannot make commitments to Ubuntu, because there is no person or body that can make such commitments on behalf of the institution, on any sort of agile basis. A GR is not agile ;-). I don't say this as a critique of Debian; remember, I think Debian has made some very important choices, one of those is the complete independence of its developers, which means they are under no obligation to follow a decision made by anyone else. It's also important to understand the difference between collaboration and teamwork. When two people have exactly the same goal and produce the same output, that's just teamwork. When two people have different goals and produce different products, but still find ways to improve one another's product, that's collaboration. So in order to have great collaboration between Ubuntu and Debian, we need to start with mutual recognition of the value and importance of the differences in our approach. When someone criticises Ubuntu because it exists, or because it does not do things the same way as Debian, or because it does not structure every process with the primary goal of improving Debian, it's sad. The differences between us are valuable: Ubuntu can take Debian places Debian cannot go, and Debian's debianness brings a whole raft of goodness for Ubuntu.

Raphael: What's the biggest problem of Debian?

Mark: Internal tension about the vision and goals of Debian makes it difficult to create a harmonious environment, which is compounded by an unwillingness to censure destructive behaviour. Does Debian measure its success by the number of installs? The number of maintainers? The number of flamewars? The number of packages? The number of messages to mailing lists? The quality of Debian Policy? The quality of packages? The freshness of packages? The length and quality of maintenance of releases? The frequency or infrequency of releases? The breadth of derivatives? Many of these metrics are in direct tension with one another; as a consequence, the fact that different DDs prioritise all of these (and other goals) differently makes for interesting debate. The sort of debate that goes on and on because there is no way to choose between the goals when everyone has different ones. You know the sort of debate I mean :-)

Raphael: Do you think that the Debian community improved in the last 7 years? If yes, do you think that the coopetition with Ubuntu partly explains it?

Mark: Yes, I think some of the areas that concern me have improved. Much of this is to do with time giving people the opportunity to consider a thought from different perspectives, perhaps with the benefit of maturity. Time also allows ideas to flow, and of course introduces new people into the mix. There are plenty of DDs now who became DDs after Ubuntu existed, so it's not as if this new supernova has suddenly gone off in their galactic neighbourhood. And many of them became DDs because of Ubuntu. So at least from the perspective of the Ubuntu-Debian relationship, things are much healthier. We could do much better. Now that we are on track for four consecutive Ubuntu LTS releases, on a two-year cadence, it's clear we could collaborate beautifully if we shared a freeze date. Canonical offered to help with Squeeze on that basis, but institutional commitment phobia reared its head and scotched it.
And with the proposal to put Debian's first planned freeze exactly in the middle of Ubuntu's LTS cycle, our alignment in interests will be at a minimum, not a maximum. Pure <facepalm />.

Raphael: What would you suggest to people (like me) who do not feel like joining Canonical and would like to be paid to work on improving Debian?

Mark: We share the problem; I would like to be paid to work on improving Ubuntu, but that's also a long-term dream ;-)

Raphael: What about using the earnings of the dormant Ubuntu Foundation to fund some Debian projects?

Mark: The Foundation is there in the event of Canonical's failure to ensure that commitments, like LTS maintenance, are met. It will hopefully be dormant for good ;-)

Raphael: The crowdfunding campaign for the Debian Administrator's Handbook is still going on and I briefly envisioned the possibility to create the Ubuntu Administrator's Handbook. What do you think of this project?

Mark: Crowdfunding is a great match for free software and open content, so I hope this works out very well for you. I also think you'd find a bigger market for an Ubuntu book, not because Ubuntu is any more important than Debian but because it is likely to appeal to people who are more inclined to buy or download a book than to dive into the source. Again, this is about understanding the difference in audiences, not judging the projects or the products.

Raphael: Is there someone in Debian that you admire for their contributions?

Mark: Zack is the best DPL since 1995; it's an impossible job which he handles with grace and distinction. I hope praise from me doesn't tarnish his reputation in the project!
Thank you to Mark for the time spent answering my questions. I hope you enjoyed reading his answers as much as I did.



7 October 2011

Matthew Garrett: Margaret Dayhoff

It's become kind of a cliché for me to claim that the reason I'm happy working on ACPI and UEFI and similarly arcane pieces of convoluted functionality is that no matter how bad things are there's at least some form of documentation and there's a well-understood language at the heart of them. My PhD was in biology, working on fruitflies. They're a poorly documented set of layering violations which only work because of side-effects at the quantum level, and they tend to die at inconvenient times. They're made up of 165 million bases of a byte code language that's almost impossible to bootstrap[1] and which passes through an intermediate representation before it does anything useful[2]. It's an awful field to try to do rigorous work in because your attempts to impose any kind of meaningful order on what you're looking at are pretty much guaranteed to be sufficiently naive that your results bear a resemblance to reality more by accident than design.

The field of bioinformatics is a fairly young one, and because of that it's very easy to be ignorant of its history. Crick and Watson (and those other people) determined the structure of DNA. Sanger worked out how to sequence proteins and nucleic acids. Some other people made all of these things faster and better and now we have huge sequence databases that mean we can get hold of an intractable quantity of data faster than we could ever plausibly need to, and what else is there to know?

Margaret Dayhoff graduated with a PhD in quantum chemistry from Columbia, where she'd performed computational analysis of various molecules to calculate their resonance energies[3]. The next few years involved plenty of worthwhile research that isn't relevant to the story, so we'll (entirely unfairly) skip forward to the early 60s and the problem of turning a set of sequence fragments into a single sequence. Dayhoff worked on a suite of applications called "Comprotein". The original paper can be downloaded here, and it's a charming look back at a rigorous analysis of a problem that anyone in the field would take for granted these days. Modern fragment assembly involves taking millions of DNA sequence reads and assembling them into an entire genome. In 1960, we were still at the point where it was only just getting impractical to do everything by hand.

This single piece of software was arguably the birth of modern bioinformatics, the creation of a computational method for taking sequence data and turning it into something more useful. But Dayhoff didn't stop there. The 60s brought a growing realisation that small sequence differences between the same protein in related species could give insight into their evolutionary past. In 1965 Dayhoff released the first edition of the Atlas of Protein Sequence and Structure, containing all 65 protein sequences that had been determined by then. Around the same time she developed computational methods for analysing the evolutionary relationship of these sequences, helping produce the first computationally generated phylogenetic tree. Her single-letter representation of amino acids was born of necessity[4] but remains the standard for protein sequences. And the atlas of 65 protein sequences developed into the Protein Information Resource, a dial-up database that allowed researchers to download the sequences they were interested in. It's now part of UniProt, the world's largest protein database.

Her contributions to the field were immense. Every aspect of her work on bioinformatics is present in the modern day - larger, faster and more capable, but still very much tied to the techniques and concepts she pioneered. And so it still puzzles me that I only heard of her for the first time when I went back to write the introduction to my thesis. She's remembered today in the form of the Margaret Oakley Dayhoff award for women showing high promise in biophysics, having died of a heart attack at only 57.

I don't work on fruitflies any more, and to be honest I'm not terribly upset by that. But it's still somewhat disconcerting that I spent almost 10 years working in a field so defined by one person that I knew so little about. So my contribution to Ada Lovelace day is to highlight a pivotal woman in science who heavily influenced my life without me even knowing.

[1] You think it's difficult bringing up a compiler on a new architecture? Try bringing up a fruitfly from scratch.
[2] Except for the cases where the low-level language itself is functionally significant, and the cases where the intermediate representation is functionally significant.
[3] Something that seems to have involved a lot of putting punch cards through a set of machines, getting new cards out, and repeating. I'm glad I live in the future.
[4] The three-letter representation took up too much space on punch cards


19 September 2011

Keith Packard: MacBook-Air-2

Fixing the Sandybridge MacBook Air display initialization

We left our hero with Debian installed on the MacBook Air, but with the display getting scrambled as soon as the i915 driver loaded. As was reported to Matthew, the problem is as simple as a lack of the right mode for the eDP panel in the machine. This mode is supposed to come from the panel EDID data, but for some reason the driver wasn't able to query the EDID data, and so it decided to try some random panel timings it dug out of the VBT tables, which are generally supposed to be used by LVDS panels. Apple helpfully stuck valid data there, but for some other panel - one that is 1280x800 pixels instead of the 1366x768 pixel panel in the MacBook Air. I heard rumors that some machines would get a black screen when the i915 driver loaded. I was fortunate - my machine simply displayed a 1366x768 subset of the programmed 1280x800 mode. A bit of garbage on the right side, and a few scanlines missing at the bottom. Quite workable, especially after I ran fbset -yres 768 to keep the console in the visible portion of the screen.

DDC failure

Looking through the kernel logs, the Intel driver tries to access the EDID data and times out, as if DDC is just completely broken. This is rather unexpected; the eDP spec says that the panel is required to support DDC and provide EDID. Now, we've seen a lot of panels which don't quite live up to the rigorous eDP specifications, but it's a bit surprising from Apple, who generally do VESA stuff pretty well. We've heard reports about panels reporting invalid EDID data, or EDID data which didn't actually match the panel (causing us to prefer the VBT data on LVDS machines). But I've not heard of an eDP panel which didn't have anything hanging off of the DDC channel.

But X works fine?

During early debugging, I happened to start X up. Much to my surprise, X came up at the native 1366x768 mode. Digging through the kernel logs after that, I discovered that EDID was successfully fetched from the eDP panel while X started up. At this point, I knew it was all downhill - the EDID data was present, it just wasn't getting picked up during the early part of the driver initialization when the console mode is initialized.

eDP power management

The CPU is given complete control over the power management of the eDP panel, sequencing through various power states and waiting appropriate amounts of time when things change. Given the goal of keeping power usage as low as possible, this makes a huge amount of sense. The eDP spec is quite clear though - without power, the panel will not respond to anything over the aux channel, and that includes EDID data. The eDP panel power hardware in the Sandybridge chip has a special mode for dealing with this requirement. If the panel is not displaying data, you can supply power for the aux channel stuff by setting a magic bit in the panel power registers. When initializing the frame buffer, the kernel driver turns off the panel completely so that it has all of the hardware in a known state (yeah, this is not optimal, but that's another bug). When X started, the panel was already running with the console mode. Given the difference between these two states - EDID querying with the panel off failed, while EDID querying with the panel on worked - it seemed pretty clear that the panel power wasn't getting managed correctly. So, it seemed pretty clear that the magic "power the panel" bit wasn't getting turned on at the right times.

Getting the power turned on
I stuck a check inside all of the aux channel communication functions to see where things were broken. This pointed out several places missing the panel power calls. This wasn't quite sufficient to get EDID data flowing. The remaining problem was that the code wasn't waiting long enough after turning the panel power on before starting the aux channel communication. A few msleep calls and huzzah! EDID at boot time and the console had the right mode.

Making it faster

However, it turns out that the driver does this a lot, and the msleeps required were fairly long - the eDP panel wants a 500ms delay from turning the panel power off before you can turn it back on. I fixed this by simply delaying the panel power-off until things had been idle for a long time. Now mode setting goes zipping through, and a few seconds later, the bit to force panel power on gets turned off.

Getting these bits for yourself

I've pushed out the code to my (temporary) kernel repository git://people.freedesktop.org/~keithp/linux in the fix-edp-vdd-power branch. I'd love to hear if you've tried this on either a MacBook Air or any other eDP machine from Ironlake onwards.
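For readers who want the shape of the fix without reading the driver, here is the sequencing idea as a Python sketch. This is not the kernel code, and the delay values are illustrative rather than taken from the eDP spec: force the panel's VDD on before any aux-channel transaction, wait for it to come up, and defer the power-off so back-to-back transactions don't each pay the off-to-on cycle.

    import time

    class EdpPanelPower:
        POWER_UP_DELAY = 0.2   # assumed panel power-up delay, seconds
        IDLE_OFF_DELAY = 1.0   # keep VDD up while transactions keep arriving

        def __init__(self):
            self.vdd_forced_on = False
            self.last_use = 0.0

        def _force_vdd_on(self):
            if not self.vdd_forced_on:
                # hardware: set the "force panel power" bit, then wait
                self.vdd_forced_on = True
                time.sleep(self.POWER_UP_DELAY)

        def aux_transaction(self, do_io):
            # e.g. an EDID read over the aux channel
            self._force_vdd_on()
            self.last_use = time.monotonic()
            return do_io()

        def maybe_power_off(self):
            # called from an idle path; only drop VDD after a quiet period
            idle = time.monotonic() - self.last_use
            if self.vdd_forced_on and idle > self.IDLE_OFF_DELAY:
                self.vdd_forced_on = False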

8 September 2011

Russell Coker: Moving from a Laptop to a Cloud Lifestyle

My Laptop History

In 1998 I bought my first laptop, a Thinkpad 385XD: it had a PentiumMMX 233MHz CPU, 96M of RAM, and an 800*600 display. This was less RAM than I could have afforded in a desktop system, and the 800*600 display didn't compare well to the 1280*1024 resolution 17 inch Trinitron monitor I had been using. Having only 1/3 the pixels is a significant loss, and a 12.1 inch TFT display of that era compared very poorly with a good Trinitron monitor. In spite of this I found it a much better system to use because it was ALWAYS with me; I used it for many things that were probably better suited to a PDA (there probably aren't many people who have carried a 7.1 pound (3.2Kg) laptop to as many places as I did), and some of my best coding was done on public transport. But I didn't buy my first laptop for that purpose, I bought it because I was moving to another country and there just wasn't any other option for having a computer.

In late 1999 I bought my second laptop, a Thinkpad 600E [1]. It had twice the CPU speed, twice the RAM, and a 1024*768 display that displayed color a lot better. Since then I have had another three Thinkpads: a T21, a T43, and now a T61. One of the ways I measure a display is the number of 80*25 terminal windows that I can display at one time; my first Thinkpad could display four windows with a significant amount of overlap. My second could display four with little overlap, my third (with 1280*1024 resolution) could display four clearly and another two with overlap, and my current Thinkpad does 1680*1050 and can display four windows clearly and another five without excessive overlap. For most of the last 13 years my Thinkpads weren't that far behind what I could afford to get as a desktop system, until now.

A Smart Phone as the Primary Computing Device

For the past 6 months the Linux system I've used most frequently is my Sony Ericsson Xperia X10 Android phone [2]. Most of my computer use is on my laptop, but the many short periods of time using my phone add up. This has forced some changes to the way I work. I now use IMAP instead of POP for receiving mail so I can use my phone and my laptop with the same mail spool. This is a significant benefit for my email productivity: instead of having 100 new mailing list messages waiting for me when I get home, I can read them on my phone and then have maybe 1 message that can't be addressed without access to something better than a phone. My backlog of 10,000 unread mailing list messages lasted less than a month after getting an Android phone! A few years ago I got an EeePC 701 that I use for emergency net access when a server goes down. But even a 920g EeePC is more weight than I want to carry, and as I need to have a mobile phone anyway there is effectively no extra mass or space used to have a phone capable of running a ssh client. My EeePC doesn't get much use nowadays.

A Cheap 27 inch Monitor from Dell

Dell Australia is currently selling a 27 inch monitor that does 2560*1440 (WQHD) for $899AU. Dell Australia offers a motor club discount which pretty much everyone in Australia can get, as almost everyone is either a member of such a club or knows a member well enough to use their membership number for the discount. This discount reduces the price to $764.15. The availability of such a great cheap monitor has caused me to change my working habits. It doesn't make sense to have a reasonably powerful laptop used in one location for almost all the time when a desktop system with a much better monitor can be used.
The Plan

Now that my 27 inch monitor has arrived I have to figure out a way of making things work. I still need to work from a laptop on occasion, but my main computer use is going to be a smart-phone and a desktop system. Email is already sorted out: I already have three IMAP client systems (netbook, laptop, and phone), and adding a desktop system as a fourth isn't going to change anything.

The next issue is software development. In the past I haven't used version control systems that much for my hobby work; I have just released a new version every time I had some significant changes. Obviously to support development on two or three systems I need to use a VCS rigorously. I'm currently considering Subversion and Git. Subversion is really easy to use (for me), but it seems to be losing popularity. Git is really popular, so if I use it for my own projects then I could allow anonymous access for anyone who's interested - maybe that will encourage more people to contribute.

One thing I haven't even investigated yet is how to manage my web browsing work-flow in a distributed manner. My pattern when using a laptop is to have many windows and tabs open at the same time for issues that I am researching and to only close them days or weeks later when I have finished with the issue. For example if I'm buying some new computer gear I will typically open a web browser window with multiple tabs related to the equipment (hardware, software, prices, etc) and keep them all open until I have received it and got it working. Chromium, Mozilla, and presumably other modern web browsers have a facility to reopen windows after a crash. It would be ideal for me if there was some sort of similar facility that allowed me to open the windows that are open on another system and to push window open commands to another system. For example when doing web browsing on my phone I would like to be able to push the URLs of pages that can't be viewed on a phone to my desktop system and have them open waiting for me when I get home. It would be nice if web browsing could be conceptually similar to a remote desktop service in terms of what the user sees.

Finally, in my home directory there are lots of random files. Probably about half of them could be deleted if I was more organised (disk space is cheap and most of the files are small). For the rest it would be good if they could be accessed from other locations. I have read about people putting the majority of their home directory under version control, but I'm not sure that would work well for me. It would be good if I could do something similar with editor sessions: if I had a file open in vi on my desktop before I left home it would be good if I could get a session on my laptop to open the same file (well, the same named file checked out of the VCS).

Configuring the Desktop System

One of the disadvantages of a laptop is that RAID usually isn't viable. With a desktop system software RAID-1 is easy to configure, but it results in two disks making heat and noise. For my new desktop system I'm thinking of using a DRBD device for /home to store the data locally as well as almost instantly copying it to RAID-1 storage on the server. The main advantage of DRBD over NFS, NBD, and iSCSI is that I can keep working if the server becomes unavailable (e.g. use the desktop system to ask Google how to fix a server fault). Also with DRBD it's a configuration option to allow synchronous writes to return after the data is written locally, which is handy if the server is congested.
Another option that I'm considering is a diskless system using NBD or iSCSI for all storage. This will prevent using swap (you can't swap to a network device because of the risk of deadlocks), but that won't necessarily be a problem given the decrease in RAM prices, as I can just buy enough RAM to not need swap.

The Future

Eventually I want to be able to use a tablet for almost everything, including software development. While a tablet display isn't going to be great for coding, I'm sure that I can make use of enough otherwise wasted time to justify the expense. I will probably need a tablet that acts like a regular Linux computer, not an Android tablet.

9 May 2011

Jonathan McDowell: A minor keyring-maint rant

This should probably be an official FAQ, but a) I wanted to rant a bit more than is probably acceptable for something "official" and b) the sort of person this information is directed at never bloody reads keyring.debian.org, which is the logical place for it.

Who are keyring-maint? Currently Gunnar Wolf (good cop) and Jonathan McDowell (bad cop). Previous keyring maintainers include Igor Grobman & James Troup.

I'd like to be a DM/DD. Do I send you my key? No. You go through the DebianMaintainer or NM processes. Then the DM team or DAM tell us to add your key to the appropriate keyring.

I'd like to replace my DM/DD key in the Debian keyring. What should I do? Read the instructions at http://keyring.debian.org/replacing_keys.html

I have a new key that isn't signed by anyone else, will you accept it? No. Did you read http://keyring.debian.org/replacing_keys.html ?

I've got a single DD signature on my new key. That's enough, right? Not unless your old key has been lost and you're getting a different DD to request the replacement for you (and if they're prepared to ask for a key replacement we'll wonder why they're not prepared to sign the new key too). Did you read http://keyring.debian.org/replacing_keys.html ?

I'm still really confused about how I should request a key replacement. Help? Try reading https://rt.debian.org/Ticket/Display.html?id=3141 (which just happens to be a recent decent example). Clear subject line (I'd have added a real name too, but it's still fairly clear), full fingerprint of the old and new keys, inline signed so RT doesn't mangle it. New key signed by old key and 3 other DDs. Request signed by old key.

That RT link needs a login. I don't have one. Have you tried reading up on the Debian RT system? There's a generic read only login that'll get you access.

That's too hard. Can't you just give me the details? Damnit. It appears the read-only login details are currently disabled due to misuse (one wonders how). Try reading http://wiki.debian.org/rt.debian.org

Why are you using RT? Isn't bugs.debian.org more appropriate? We need the ability for people to contact us in a private fashion, for example if they need us to remove a key because it's been lost or compromised. We could only use RT for that purpose and use bugs.d.o for things that can be public, but this way all the information is in one place and we get to make the call about when it becomes a publicly viewable ticket.

What's with jetring? Should I send you a jetring changeset? jetring is a tool written by Joey Hess that used to be used to manage the Debian Maintainers keyring. keyring-maint borrowed a number of good ideas from jetring but don't use it at all. We ignore jetring changesets.

So you just want key fingerprints, not attached keys? Yes. Of course you have to make sure your key is actually on a public keyserver so we can get it. the.earth.li is a good choice (because Jonathan runs it and thus pays more attention to it), but subkeys.pgp.net or pool.sks-keyservers.net are also commonly used.

My key has expired and I want to update the key expiry date. I should email RT asking for this to be done, right? No, you should send the updated key via HKP to keyring.debian.org. You can do this with "gpg --keyserver keyring.debian.org --send-key <keyid>". Obviously replace <keyid> with your own key ID.

I tried to send an entirely new key via HKP to keyring.debian.org, but I can't see it there. What gives? keyring.debian.org only accepts updates to keys it already knows about.
That means you can send updated expiry dates, new uids and new signatures to your existing key, but not an entirely new key.

I sent my updated key via HKP to keyring.debian.org and can see it's updated there, but the Debian archive processing tools (eg dak) don't seem to recognize the update. Why not? The updates sent via HKP are folded back into the HKP server automatically every 15 minutes or so. They are folded into the live Debian keyrings on a manual basis, at least once a month. This means if your key has an expiry date then you probably want to update your key at least a month before it expires.

Where can I find these live Debian keyrings? They're what's available via rsync from keyring.debian.org::keyrings/keyrings/ This is the canonical location for the current Debian Developers and Debian Maintainers keyrings.

What about the debian-keyring package? This is a convenience package of the keyrings. It's usually the most out of date. We update it sporadically and try to ensure that the version shipped with a stable Debian release is current at the point of release. It is not used by any of the official Debian infrastructure.

Why don't you automatically update my key in the live keyring when I send an update via HKP? We think that automatic updates of keys that allow uploads to Debian are a bad thing and that involving a human eye at some step of the process is a useful sanity check.

Paranoid much? Never enough.

How are updates to the keyring tracked? We use bzr to maintain the keyring, with a separate file per key that can then be easily combined into the various keyrings. You can see the repository at: http://bzr.debian.org/scm/loggerhead/keyring/debian-keyring/changes Note that this is only updated when a keyring is pushed to live; the working tree may contain details of compromised keys and thus isn't public.

What's with the whole replacement of 1024 bit keys? 2 things. Firstly, 1024 bit keys tend to use SHA1 as a hash algorithm, which has been shown to be weaker than expected. While we're not aware of active exploits against this, updating all of the keys Debian uses is not a trivial process and it's wiser to get it done /before/ there's a known issue. Secondly, computing power has moved on and we feel that upgrading to larger key sizes is prudent.

Elliptic curve cryptography (ECC) keys look like the future. Can I use one for Debian? No, not at present. When there are tools that are part of a Debian stable release that support them we'll look into it, after discussion with the major users of the keyring (DSA, ftpmaster, the secretary).

11 March 2011

Pietro Abate: On equivalent debian versions

For one of our experiments, we ended up analyzing all versions mentioned in a Debian Packages file, whether in a constraint or in a version field of a package. It seems that there are a lot of DDs that like to use strange 'formatting' when writing down a versioned constraint. This might not be a choice made directly by the Debian maintainer, but just a consequence of a particular versioning scheme from upstream. The Debian policy is rigorous about the algorithm used to compare versions:
 The strings are compared from left to right.

 First the initial part of each string consisting entirely of non-digit characters is determined. These two parts (one of which may be empty) are compared lexically. If a difference is found it is returned. The lexical comparison is a comparison of ASCII values modified so that all the letters sort earlier than all the non-letters and so that a tilde sorts before anything, even the end of a part. For example, the following parts are in sorted order from earliest to latest: ~~, ~~a, ~, the empty part, a.

 Then the initial part of the remainder of each string which consists entirely of digit characters is determined. The numerical values of these two parts are compared, and any difference found is returned as the result of the comparison. For these purposes an empty string (which can only occur at the end of one or both version strings being compared) counts as zero.

 These two steps (comparing and removing initial non-digit strings and initial digit strings) are repeated until a difference is found or both strings are exhausted.
I think the important part is about the numerical comparison: "The numerical values of these two parts are compared, and any difference found is returned as the result of the comparison." Our tool finds a number of examples of equivalent versions: for example, version '0.00001-1' is from the package libbenchmark-progressbar-perl. I don't know why so many equivalent ways of writing a version string are used. They probably appear in completely unrelated packages, and I'm pretty sure no harm is intended :) However, to avoid confusion, it might be a good idea to settle on a non-normative canonical representation of versions - like no leading '0' in front of dots and hyphens... Maybe we could add a warning in lintian? If somebody is interested I can generate a full report that associates each version to a package name.
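Since the quoted algorithm is short, here is a minimal sketch of it in Python (upstream version and Debian revision only; epochs are ignored for brevity). It also shows how versions differing only in leading zeros compare as equal under these rules:

    import re

    def _order(c):
        # '~' sorts before everything, letters before non-letters.
        if c == '~':
            return -1
        return ord(c) if c.isalpha() else ord(c) + 256

    def _cmp_part(a, b):
        while a or b:
            # Compare the leading non-digit runs character by character;
            # a missing character (end of part) counts as 0.
            while (a and not a[0].isdigit()) or (b and not b[0].isdigit()):
                ca = _order(a[0]) if a and not a[0].isdigit() else 0
                cb = _order(b[0]) if b and not b[0].isdigit() else 0
                if ca != cb:
                    return ca - cb
                if a and not a[0].isdigit(): a = a[1:]
                if b and not b[0].isdigit(): b = b[1:]
            # Compare the leading digit runs numerically (empty counts as zero).
            da = re.match(r'\d*', a).group()
            db = re.match(r'\d*', b).group()
            a, b = a[len(da):], b[len(db):]
            if int(da or '0') != int(db or '0'):
                return int(da or '0') - int(db or '0')
        return 0

    def compare_versions(v1, v2):
        """Negative, zero or positive, in the spirit of dpkg (no epochs)."""
        def split(v):
            up, _, rev = v.rpartition('-')
            return (up, rev) if up else (v, '')
        u1, r1 = split(v1)
        u2, r2 = split(v2)
        return _cmp_part(u1, u2) or _cmp_part(r1, r2)

    # Leading zeros in numeric parts are insignificant, so e.g. '0.00001-1'
    # and '0.1-1' (or '1.02-1' and '1.2-1') compare equal.
    assert compare_versions('0.00001-1', '0.1-1') == 0
    assert compare_versions('1.1~rc1-1', '1.1-1') < 0   # tilde sorts first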
